Emergent Typological Effects of Agent-Based Learning Models in Maximum Entropy Grammar
This dissertation shows how a theory of grammatical representations and a theory of learning can be combined to generate gradient typological predictions in phonology, predicting not only which patterns are expected to exist, but also their relative frequencies: patterns which are learned more easily are predicted to be more typologically frequent than those which are more difficult.
In Chapter 1 I motivate and describe the specific implementation of this methodology in this dissertation. Maximum Entropy grammar (Goldwater & Johnson 2003) is combined with two agent-based learning models, the iterated and the interactive learning model, each of which mimics a type of learning dynamic observed in natural language acquisition.
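For concreteness, a Maximum Entropy grammar of the kind cited here assigns each candidate a probability proportional to the exponential of its negated harmony, where harmony is the weighted sum of the candidate's constraint violations (Goldwater & Johnson 2003). A minimal sketch, using a hypothetical final-devoicing tableau (the constraint names, weights, and violation profiles are illustrative, not taken from the dissertation):

```python
import math

def maxent_probs(weights, violations):
    """Candidate probabilities under a MaxEnt grammar.

    weights: constraint -> nonnegative weight
    violations: candidate -> {constraint: violation count}
    Harmony is the weighted violation sum; probability is
    exp(-harmony), normalized over the candidate set.
    """
    harmonies = {
        cand: sum(weights[c] * v for c, v in viols.items())
        for cand, viols in violations.items()
    }
    z = sum(math.exp(-h) for h in harmonies.values())
    return {cand: math.exp(-h) / z for cand, h in harmonies.items()}

# Hypothetical tableau for /tab/: faithful [tab] vs. devoiced [tap].
weights = {"*VoicedCoda": 2.0, "Ident(voice)": 1.0}
violations = {
    "[tab]": {"*VoicedCoda": 1, "Ident(voice)": 0},
    "[tap]": {"*VoicedCoda": 0, "Ident(voice)": 1},
}
probs = maxent_probs(weights, violations)
```

Because both candidates retain nonzero probability, the formalism represents variable as well as categorical patterns, which is what makes the gradient typological predictions described above possible.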
In Chapter 2 I illustrate how this system works using a simplified, abstract example typology, and show how the models generate a bias away from patterns which rely on cumulative constraint interaction (gang effects), and a bias away from variable patterns. Both of these biases match observed trends in natural language typology and psycholinguistic experiments.
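The cumulative interaction at issue can be shown with simple arithmetic in a weighted grammar: neither markedness constraint alone outweighs faithfulness, but their summed weights do. The constraint names and weights below are invented for illustration:

```python
def harmony(weights, violations):
    """Weighted sum of constraint violations (lower is better)."""
    return sum(weights[c] * v for c, v in violations.items())

# Illustrative weights: two markedness constraints at 2.0 each,
# one faithfulness constraint at 3.0.
w = {"*A": 2.0, "*B": 2.0, "Faith": 3.0}

# Each markedness violation alone is cheaper than being unfaithful...
alone_a = harmony(w, {"*A": 1})        # 2.0
alone_b = harmony(w, {"*B": 1})        # 2.0
unfaithful = harmony(w, {"Faith": 1})  # 3.0

# ...but violating both at once is costlier: the constraints "gang up",
# and the unfaithful candidate wins only in that configuration.
both = harmony(w, {"*A": 1, "*B": 1})  # 4.0
```

No single constraint reranking reproduces this pattern; it depends on summing weights, which is exactly the kind of pattern the learning models are biased against.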
Chapter 3 further explores the models' bias away from cumulative constraint interaction using an empirical test case: the typology of possible patterns of contrast between two fricatives. This typology yields five possible patterns, the rarest of which is the result of a gang effect. The results of simulations performed with both models produce a bias against the gang effect pattern.
Chapter 4 further explores the models' bias away from variation using evidence from artificial grammar learning experiments, in which human participants show a bias away from variable patterns (e.g. Smith & Wonnacott 2010). This test case was chosen additionally to disambiguate between variable behavior within a lexical item (variation), and variable behavior across lexical items (exceptionality). The results of simulations performed with both learning models are consistent with the observed bias away from variable patterns in humans.
The results of the iterated and interactive learning models presented in this dissertation support the use of this methodology for investigating the typological predictions of linguistic theories of grammar and learning, and for addressing broader questions about the source of gradient typological trends and about whether certain properties of natural language must be innately specified or might emerge through other means.
Investigating the Consequences of Iterated Learning in Phonological Typology
This work builds on previous investigations of the effects of learning biases on gradient typological predictions in phonology. Our previous work (e.g. Hughto and Pater 2017) used an interactive, agent-based learning model and found robust biases against cumulativity effects in weighted-constraint grammars, and towards more deterministic grammars, where one output accumulates majority probability. This work compares the results of using an iterated learning model, in which “parent” agents teach “child” agents in a generational chain, and finds that these biases are present, but less robust across parameter settings. The deterministic bias was only present with longer learning times; the anti-cumulativity bias was more robust, but only emerged with shorter learning times if child agents' initial weights were set to zero (rather than randomly sampled).
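The generational chain described here can be sketched as follows. The sampling procedure and the error-driven, perceptron-style update on violation differences are my assumptions about a typical MaxEnt implementation, not details taken from the paper; the tableau and constraint names are likewise illustrative:

```python
import math
import random

def sample(weights, tableau, rng):
    """Sample one candidate from the MaxEnt distribution."""
    cands = list(tableau)
    expneg = [math.exp(-sum(weights[c] * v for c, v in tableau[k].items()))
              for k in cands]
    r = rng.random() * sum(expneg)
    acc = 0.0
    for cand, e in zip(cands, expneg):
        acc += e
        if r <= acc:
            return cand
    return cands[-1]

def learn(parent_w, tableau, steps, rate, rng, init=0.0):
    """A child learns from a parent's sampled productions.

    init=0.0 models the zero-initial-weights condition from the text;
    pass random values instead to model sampled initial weights.
    """
    w = {c: init for c in parent_w}
    for _ in range(steps):
        datum = sample(parent_w, tableau, rng)  # parent "teaches"
        guess = sample(w, tableau, rng)         # child produces
        if guess != datum:                      # error-driven update
            for c in w:
                w[c] += rate * (tableau[guess].get(c, 0)
                                - tableau[datum].get(c, 0))
                w[c] = max(w[c], 0.0)           # keep weights nonnegative
    return w

# Toy two-candidate tableau (names are illustrative).
tableau = {
    "[tap]": {"*VoicedCoda": 0, "Ident": 1},
    "[tab]": {"*VoicedCoda": 1, "Ident": 0},
}
rng = random.Random(0)
chain = [{"*VoicedCoda": 3.0, "Ident": 0.0}]  # generation-0 "parent"
for _ in range(5):                            # five generations
    chain.append(learn(chain[-1], tableau, steps=200, rate=0.1, rng=rng))
```

The `steps` parameter corresponds to the learning-time manipulation discussed above, and `init` to the zero-versus-random initialization contrast.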
Learning Exceptionality and Variation with Lexically Scaled MaxEnt
A growing body of research in phonology addresses the representation and learning of variable processes and exceptional, lexically conditioned processes. Linzen et al. (2013) present a MaxEnt model with additive lexical scales to account for data exhibiting both variation and exceptionality. In this paper, we implement a learning model for lexically scaled MaxEnt grammars which we show to be successful across a range of data containing patterns of variation and exceptionality. We also explore how the model's parameters and the rate of exceptionality in the data influence its performance and predictions for novel forms.
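On one minimal reading of "additive lexical scales", each lexical item carries an adjustment that is added to designated constraint weights when evaluating that item, so an exceptional word can resist a process that applies variably elsewhere. The sketch below implements only that reading; the details of Linzen et al.'s model may differ, and all names and numbers are illustrative:

```python
import math

def lex_probs(base_w, scale, violations):
    """Candidate probabilities for one lexical item, whose additive
    scale shifts the weights of the constraints it lists."""
    def w(c):
        return base_w[c] + scale.get(c, 0.0)
    hs = {cand: sum(w(c) * v for c, v in viols.items())
          for cand, viols in violations.items()}
    z = sum(math.exp(-h) for h in hs.values())
    return {cand: math.exp(-h) / z for cand, h in hs.items()}

# Equal base weights yield a 50/50 variable devoicing process.
base = {"*VoicedCoda": 1.0, "Ident(voice)": 1.0}
tableau = {
    "faithful": {"*VoicedCoda": 1, "Ident(voice)": 0},
    "devoiced": {"*VoicedCoda": 0, "Ident(voice)": 1},
}
regular = lex_probs(base, {}, tableau)
# An exceptional item's scale boosts Ident(voice), so it resists devoicing.
exceptional = lex_probs(base, {"Ident(voice)": 3.0}, tableau)
```

This separates the two phenomena the abstract distinguishes: variation lives in the base weights, while exceptionality lives in the per-item scales.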
Faith-UO: Counterfeeding in Harmonic Serialism
[Abstract not available]
Analyzing opacity with contextual faithfulness constraints
Phonological opacity is well-studied, and there are numerous proposals in the literature which analyze opacity in Optimality-Theoretic grammars. However, many analyses include significant elaborations to the basic architecture of Optimality Theory (Prince & Smolensky 1993/2004) or its serial alternative, Harmonic Serialism (McCarthy 2000). In this paper, we propose a method of analyzing opacity which avoids such significant enhancements to the basic theory by using faithfulness constraints with input-defined contexts. These constraints bear many similarities to standard positional faithfulness constraints (Beckman 1997; 1998; Lombardi 1999), but the context is input-defined (as in Jesney 2011). Adding context to faithfulness constraints has previously been discussed as a potential solution to counterfeeding opacity, but dismissed on account of potentially creating an overly rich faithfulness theory (McCarthy 2007a). We argue that the analytical potential of these constraints outweighs the potential problems associated with overgeneration. We show that contextual faithfulness constraints can be employed to analyze multiple types of underapplication opacity in parallel OT and multiple types of under- and overapplication opacity in Harmonic Serialism. We discuss the impact of including these constraints in a universal Con and suggest the potential of language-specific constraint induction for mitigating overgeneration effects.
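As a deliberately simplified illustration of an input-defined context, the sketch below counts violations of an Ident(voice) constraint that is assessed only for segments meeting a predicate computed over the input, never the output. The segment representation and the specific constraint are my own toy assumptions, not the paper's formalism:

```python
# Segments as (symbol, voiced) pairs, aligned one-to-one for simplicity.
def ident_voice_in_context(inp, out, context):
    """Violations of a contextual Ident(voice): penalize a voicing
    change only where the INPUT segment satisfies the context."""
    return sum(1 for i, (s_in, s_out) in enumerate(zip(inp, out))
               if context(inp, i) and s_in[1] != s_out[1])

def intervocalic(form, i):
    """Input-defined context: segment i sits between input vowels."""
    vowels = {"a", "e", "i", "o", "u"}
    return (0 < i < len(form) - 1
            and form[i - 1][0] in vowels
            and form[i + 1][0] in vowels)

ada = [("a", True), ("d", True), ("a", True)]   # input /ada/
ata = [("a", True), ("t", False), ("a", True)]  # output [ata]: 1 violation
da = [("d", True), ("a", True)]                 # input /da/
ta = [("t", False), ("a", True)]                # output [ta]: 0 violations
```

Because the context is checked on the input, the violation count does not shift as later steps of a derivation alter the surface string, which is the property exploited for opacity here.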